Logic Tensor Networks


Enhancing Symbolic Machine Learning by Subsymbolic Representations

Roth, Stephen, Baur, Lennart, Boer, Derian, Kramer, Stefan

arXiv.org Artificial Intelligence

The goal of neuro-symbolic AI is to integrate symbolic and subsymbolic AI approaches, overcoming the limitations of either. Prominent systems include Logic Tensor Networks (LTN) and DeepProbLog, which offer neural predicates and end-to-end learning. The versatility of systems like LTN and DeepProbLog, however, makes them less efficient in simpler settings, for instance in discriminative machine learning, in particular in domains with many constants. Therefore, we follow a different approach: we propose to enhance symbolic machine learning schemes by giving them access to neural embeddings. In the present paper, we show this for TILDE and embeddings of constants used by TILDE in similarity predicates. The approach can be fine-tuned by further refining the embeddings depending on the symbolic theory. In experiments in three real-world domains, we show that this simple, yet effective, approach outperforms all other baseline methods in terms of the F1 score. The approach could be useful beyond this setting: enhancing symbolic learners in this way could be extended to similarities between instances (effectively working like kernels within a logical language), to analogical reasoning, or to propositionalization. Keywords: Neuro-symbolic AI · TILDE · Inductive Logic Programming · Neural Embeddings · Logic Tensor Networks
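The similarity-predicate idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the embedding table, the threshold, and the predicate name `sim` are all invented for the example.

```python
# Hedged sketch: a thresholded cosine-similarity predicate over constant
# embeddings, of the kind a symbolic learner such as TILDE could test as
# a boolean literal sim(A, B). All names and values are illustrative.
import math

# Toy embedding table for logical constants (assumed, not from the paper).
EMBEDDINGS = {
    "aspirin":   [0.9, 0.1, 0.0],
    "ibuprofen": [0.8, 0.2, 0.1],
    "warfarin":  [0.1, 0.9, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sim(a, b, threshold=0.9):
    """Boolean similarity predicate usable inside a logical clause body."""
    return cosine(EMBEDDINGS[a], EMBEDDINGS[b]) >= threshold
```

A learned clause could then contain literals such as `sim(X, "aspirin")`, letting the symbolic learner exploit subsymbolic similarity without leaving its logical language.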


LTNtorch: PyTorch Implementation of Logic Tensor Networks

Carraro, Tommaso, Serafini, Luciano, Aiolli, Fabio

arXiv.org Artificial Intelligence

Logic Tensor Networks (LTN) is a Neuro-Symbolic framework that effectively integrates deep learning and logical reasoning. In particular, LTN allows defining a logical knowledge base and using it as the objective of a neural model. This makes learning by logical reasoning possible, as the parameters of the model are optimized by minimizing a loss function composed of a set of logical formulas expressing facts about the learning task. The framework learns via gradient-descent optimization. Fuzzy logic, a relaxation of classical logic permitting continuous truth values in the interval [0, 1], makes this learning possible. Specifically, the training of an LTN consists of three steps: (1) the training data is used to ground the formulas; (2) the formulas are evaluated and the loss function is computed; (3) the gradients are back-propagated through the logical computational graph, and the weights of the neural model are updated so that the knowledge base is maximally satisfied. LTNtorch is the fully documented and tested PyTorch implementation of Logic Tensor Networks. This paper presents the formalization of LTN and how LTNtorch implements it. Moreover, it provides a basic binary classification example.
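The three training steps in the abstract (grounding, formula evaluation, gradient-based satisfaction maximization) can be illustrated without the LTNtorch package itself. The following pure-Python sketch uses a one-parameter predicate, a pMeanError-style universal quantifier, and a finite-difference stand-in for back-propagation; every name and formula here is an illustrative assumption, not LTNtorch's actual API.

```python
# Hedged sketch of LTN-style training: fuzzy truth values in [0, 1], a
# universally quantified formula aggregated over grounded data, and
# gradient descent on 1 - satisfaction. Illustrative only, not LTNtorch.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forall(truths, p=2):
    # pMeanError-style aggregator: 1 - (mean((1 - t)^p))^(1/p)
    m = sum((1.0 - t) ** p for t in truths) / len(truths)
    return 1.0 - m ** (1.0 / p)

def satisfaction(w, positives, negatives):
    # Knowledge base: (forall x in positives: P(x)) AND
    #                 (forall x in negatives: not P(x))
    sat_pos = forall([sigmoid(w * x) for x in positives])
    sat_neg = forall([1.0 - sigmoid(w * x) for x in negatives])
    return min(sat_pos, sat_neg)  # conjunction via the min t-norm

def train(positives, negatives, steps=200, lr=0.5, eps=1e-4):
    w = 0.0
    for _ in range(steps):
        # Steps 1-2: ground the formulas on the data, evaluate the loss.
        loss = 1.0 - satisfaction(w, positives, negatives)
        # Step 3: finite-difference stand-in for back-propagation.
        grad = ((1.0 - satisfaction(w + eps, positives, negatives)) - loss) / eps
        w -= lr * grad
    return w

w = train(positives=[1.0, 2.0, 3.0], negatives=[-1.0, -2.0])
```

After training, the knowledge base should be better satisfied than at initialization, mirroring how LTN maximizes satisfiability of the logical loss.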


Sisteme Hibride de Invatare Automata si Aplicatii (Hybrid Machine Learning Systems and Applications)

Hogea, Eduard, Onchis, Darian

arXiv.org Artificial Intelligence

In this paper, a deep neural network approach and a neuro-symbolic one are proposed for classification and regression. The neuro-symbolic predictive models, based on Logic Tensor Networks, are capable of discriminating between bad connections, called alerts or attacks, and normal connections, while at the same time explaining their characterization. The proposed hybrid systems combine the ability of deep neural networks to improve through experience with the interpretability of the results provided by symbolic artificial intelligence approaches. To justify the need for shifting towards hybrid systems, the dense neural network and the neuro-symbolic network are explained, implemented, and compared in detail. To make the comparison relevant, the same datasets were used in training, and the resulting metrics were compared. A review of these metrics shows that both methods achieve similar precision in their predictive models, while Logic Tensor Networks additionally support deductive reasoning over the data. Other advantages and disadvantages, such as overfitting mitigation and scalability issues, are also discussed.


Injecting Prior Knowledge for Transfer Learning into Reinforcement Learning Algorithms using Logic Tensor Networks

Badreddine, Samy, Spranger, Michael

arXiv.org Machine Learning

Humans' ability to solve complex tasks is aided by prior knowledge about the objects and events in their environment. This paper investigates the use of similar prior knowledge for transfer learning in Reinforcement Learning agents. In particular, the paper proposes to use a first-order-logic language grounded in deep neural networks to represent facts about objects and their semantics in the real world. Facts are provided as background knowledge a priori to learning a policy for how to act in the world. The priors are injected alongside the conventional input in a single agent architecture. As a proof of concept, the paper tests the system in simple experiments that show the importance of symbolic abstraction and flexible fact derivation. The paper shows that the proposed system can learn to take advantage of both the symbolic layer and the image layer in a single decision selection module.
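The idea of injecting priors alongside the conventional input might look roughly like the following sketch, in which fuzzy truth values of grounded facts are concatenated with image features before a single decision layer. The feature sizes, fact names, and weights are all invented for illustration; this is not the paper's architecture.

```python
# Hedged sketch: fuse symbolic priors (fuzzy truth values of grounded
# facts) with subsymbolic image features in one decision module.
# Every dimension, fact, and weight below is an illustrative assumption.
import random

random.seed(0)

def decision_module(image_features, fact_truths, weights):
    """Single linear decision layer over the concatenated input."""
    x = image_features + fact_truths  # concatenate symbolic and image input
    scores = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in weights]
    return scores.index(max(scores))  # greedy action selection

NUM_ACTIONS = 3
IMG_DIM, FACT_DIM = 4, 2  # assumed sizes

# Image features from some perception stack, plus truth values of two
# hypothetical grounded facts, e.g. near(agent, goal) and blocked(path).
image_features = [0.2, 0.7, 0.1, 0.5]
fact_truths = [0.9, 0.1]

weights = [[random.uniform(-1, 1) for _ in range(IMG_DIM + FACT_DIM)]
           for _ in range(NUM_ACTIONS)]

action = decision_module(image_features, fact_truths, weights)
```

Because both input streams feed the same decision layer, the agent can, in principle, weight symbolic facts and raw perception jointly when selecting an action, which is the behaviour the paper's experiments probe.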